4 research outputs found

    Lossy Compression and Its Application on Large Scale Scientific Datasets

    High Performance Computing (HPC) applications are continually growing in data size and computational complexity, making fault tolerance and system recovery necessary to reduce computation and resource costs. Modern large-scale HPC applications face bottlenecks from computational complexity, increased runtime, and large data storage requirements, and these issues cannot be ignored in the current supercomputing era. Data compression is an effective way to address the storage problem, and lossy compression is far more feasible and efficient than traditional lossless compression given the low I/O bandwidth available to large applications. The goal of this work is to find the lossy compression configuration that achieves the maximum compression ratio within a minimal user-controlled error. To that end, two large-scale applications were tested with various parameters of the well-known SZ compression method. The first is NWChem, a quantum chemistry HPC application; the second is vascular blood flow simulation data generated by HemeLB, a parallel lattice Boltzmann code for fluid flow simulations with complex geometries. The SZ compressor is integrated into the applications' code to test correctness and scalability and to give a comparative picture of the performance change. Lastly, statistical methods are tested to pre-determine the data distortion for different error bounds.
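
    A minimal sketch of the evaluation loop the abstract describes, not SZ itself: a toy error-bounded "quantize then deflate" compressor (the quantization-plus-lossless-backend structure is SZ-like, but the code and names here are illustrative assumptions), swept over user-controlled absolute error bounds to report compression ratio and worst-case point-wise error.

```python
import zlib
import numpy as np

def lossy_compress(data: np.ndarray, abs_err: float) -> bytes:
    """Toy error-bounded compressor: uniform quantization to within
    +/- abs_err, then zlib as the lossless back end (SZ stand-in)."""
    codes = np.round(data / (2.0 * abs_err)).astype(np.int32)
    return zlib.compress(codes.tobytes())

def lossy_decompress(blob: bytes, abs_err: float) -> np.ndarray:
    codes = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return codes.astype(np.float64) * (2.0 * abs_err)

# Sweep error bounds the way the study sweeps SZ configurations.
data = np.random.default_rng(0).normal(size=1_000_000)
for abs_err in (1e-2, 1e-3, 1e-4):
    blob = lossy_compress(data, abs_err)
    recon = lossy_decompress(blob, abs_err)
    ratio = data.nbytes / len(blob)
    max_err = np.abs(data - recon).max()
    print(f"bound={abs_err:.0e}  ratio={ratio:6.1f}  max_err={max_err:.2e}")
```

    Tighter bounds cost compression ratio, which is exactly the error-versus-ratio trade-off the study tunes.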

    LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System

    Collision avoidance is a critical task in many applications, such as ADAS (advanced driver-assistance systems), industrial automation, and robotics. In an industrial automation setting, certain areas should be off limits to an automated vehicle for protection of people and high-valued assets. These areas can be quarantined by mapping (e.g., GPS) or via beacons that delineate a no-entry area. We propose a delineation method where the industrial vehicle utilizes a LiDAR (Light Detection and Ranging) and a single color camera to detect passive beacons and model-predictive control to stop the vehicle from entering a restricted space. The beacons are standard orange traffic cones with a highly reflective vertical pole attached. The LiDAR can readily detect these beacons, but suffers from false positives due to other reflective surfaces such as worker safety vests. Herein, we put forth a method for reducing false positive detection from the LiDAR by projecting the beacons in the camera imagery via a deep learning method and validating the detection using a neural network-learned projection from the camera to the LiDAR space. Experimental data collected at Mississippi State University's Center for Advanced Vehicular Systems (CAVS) shows the effectiveness of the proposed system in keeping the true detections while mitigating false positives.
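
    A hedged sketch of the fusion gate the abstract describes: a LiDAR return is kept only when its projection into the image lands near a camera cone detection. In the paper the camera-to-LiDAR mapping is learned by a neural network and the cones come from a deep detector; here both are stand-ins (a fixed pinhole matrix and precomputed pixel detections), and all names are hypothetical.

```python
import numpy as np

def project_to_image(P: np.ndarray, xyz: np.ndarray) -> np.ndarray:
    """Pinhole projection of a 3-D LiDAR point into pixel coordinates
    (stand-in for the paper's network-learned projection)."""
    uvw = P @ np.append(xyz, 1.0)
    return uvw[:2] / uvw[2]

def validate_beacons(P, lidar_hits, camera_cones, gate_px=25.0):
    """Keep a LiDAR return only if a camera cone detection lies within
    gate_px pixels of its projection -- the false-positive filter."""
    return [xyz for xyz in lidar_hits
            if any(np.linalg.norm(project_to_image(P, xyz) - c) < gate_px
                   for c in camera_cones)]

# Hypothetical projection matrix and detections.
P = np.hstack([np.diag([800.0, 800.0, 1.0]), np.zeros((3, 1))])
lidar_hits = [np.array([1.0, 0.5, 10.0]), np.array([-3.0, 0.2, 8.0])]
camera_cones = [np.array([80.0, 40.0])]   # pixel centers from the camera detector
print(validate_beacons(P, lidar_hits, camera_cones))  # keeps only the first hit
```

    A reflective vest that triggers the LiDAR but not the cone detector fails this gate, which is how the fusion suppresses that class of false positive.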

    Object detection using feature extraction and deep learning for advanced driver assistance systems

    A comparison of the performance of traditional support vector machine (SVM), single-kernel, multiple kernel learning (MKL), and modern deep learning (DL) classifiers is presented in this thesis. The goal is to implement different machine-learning classification systems for object detection in three-dimensional (3D) Light Detection and Ranging (LiDAR) data. The linear SVM, nonlinear single-kernel, and MKL classifiers require hand-crafted features for training and testing, whereas the DL approach learns the features itself during training. The thesis concludes with an assessment of all the classification methods.
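
    A minimal sketch of the hand-crafted-feature path the abstract contrasts with deep learning: illustrative point-cloud descriptors fed to a scikit-learn SVM. The descriptors, synthetic data, and labels are assumptions for demonstration, not the thesis's actual feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def handcrafted_features(cloud: np.ndarray) -> np.ndarray:
    """Illustrative descriptors for one LiDAR segment (N x 3 points):
    bounding-box extent, height statistics, and point density."""
    extent = cloud.max(axis=0) - cloud.min(axis=0)
    return np.concatenate([extent,
                           [cloud[:, 2].mean(), cloud[:, 2].std(),
                            len(cloud) / max(extent.prod(), 1e-6)]])

# Synthetic stand-in segments with a toy "object vs. clutter" label.
rng = np.random.default_rng(0)
X = np.vstack([handcrafted_features(rng.normal(size=(100, 3)) * s)
               for s in rng.uniform(0.5, 2.0, size=200)])
y = (X[:, 2] > np.median(X[:, 2])).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))  # kernel="rbf" for the nonlinear case
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

    A DL classifier would replace `handcrafted_features` entirely, learning its representation from the raw points; that is the trade-off the thesis evaluates.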

    Analyzing the Performance and Accuracy of Lossy Checkpointing on Sub-Iteration of NWChem

    Future exascale systems are expected to fail more frequently than current petascale systems. This places increased importance on minimizing the time an application wastes on recomputation when recovering from a checkpoint. HPC applications typically checkpoint at iteration boundaries; however, for applications with a high per-iteration cost, checkpointing inside the iteration limits the amount of recomputation. This paper analyzes the performance and accuracy of lossy compressed checkpointing in the computational chemistry application NWChem. Our results indicate that lossy compression is an effective tool for reducing the sub-iteration checkpoint size, and we quantify the compression error tolerances that yield acceptable deviation in accuracy and iteration count.
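
    A hedged sketch of the mechanism the paper analyzes: checkpointing inside the iteration with an error-bounded lossy compressor, so a failure costs at most one sub-step of recomputation. The toy quantize-plus-deflate compressor stands in for SZ, and the loop, file name, and error bound are illustrative assumptions.

```python
import zlib
import numpy as np

def checkpoint(state: np.ndarray, abs_err: float, path: str) -> None:
    """Error-bounded lossy checkpoint: quantize to within +/- abs_err,
    deflate, and write (toy stand-in for an SZ-compressed checkpoint)."""
    codes = np.round(state / (2.0 * abs_err)).astype(np.int64)
    with open(path, "wb") as f:
        f.write(zlib.compress(codes.tobytes()))

def restore(path: str, abs_err: float) -> np.ndarray:
    with open(path, "rb") as f:
        codes = np.frombuffer(zlib.decompress(f.read()), dtype=np.int64)
    return codes.astype(np.float64) * (2.0 * abs_err)

# Checkpoint at sub-iteration boundaries, not just the iteration boundary.
state = np.zeros(10_000)
for iteration in range(3):
    for sub in range(4):                     # hypothetical sub-iterations
        state = state + np.sin(state + sub)  # stand-in for the real work
        checkpoint(state, abs_err=1e-6, path="ckpt.bin")
# After a failure: state = restore("ckpt.bin", abs_err=1e-6)
```

    The tolerance matters because a restored state differs from the true one by up to the bound; the paper quantifies which bounds leave NWChem's accuracy and iteration count acceptable.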